
Switch to terminaltables3 #1102

Merged
NeffIsBack merged 2 commits into Pennyw0rth:main from elboulangero:terminaltables3
Mar 11, 2026

Conversation

@elboulangero
Contributor

From terminaltables3 README:

> [terminaltables3] is a fork of the terminaltables project. Which is
> archived and unmaintained. This library is in a new namespace but should
> otherwise be a drop in replacement. Maintaining goals consist of
> maintaining ecosystem compatibility, type annotations and responding to
> community pull requests.

Debian has removed the old terminaltables project and only provides terminaltables3 now (in the next release, Debian 14 "forky", which is currently Debian testing). Other distros will probably do the same, or have done it already.

Note: I didn't test the change, and I updated poetry.lock by hand. Probably someone wants to double-check that what I did is correct.
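
For reference, the dependency change amounts to something like the following in pyproject.toml (the version constraints shown here are illustrative placeholders, not taken from this PR):

```toml
[tool.poetry.dependencies]
# before: terminaltables = "^3.1.0"   (archived, unmaintained upstream)
# after: the drop-in fork under its new namespace
terminaltables3 = "^3.0.0"
```

The poetry.lock file then needs to be regenerated (or, as here, edited by hand) to match.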

@Marshall-Hallenbeck
Collaborator

Hey, thanks for the PR, but if you are going to submit something, please test it before doing so. We have a PR template that you should also follow, and one of its steps is testing the change yourself before submitting. I will be marking this as on hold until you can fill out the template and test this change for anything breaking.

@Marshall-Hallenbeck added the "on hold" and "dependencies" (Pull requests that update a dependency file) labels Feb 10, 2026
@NeffIsBack
Member

Thanks for the update, gonna check and test it soon!

On a different note, glad that I can catch you here :)
Is there some way I can get in touch with you or someone else on the kali team regarding future releases on NetExec? I think it would really help with stability if I could get in touch with you guys when we prepare for updates. To be clear, I don't want to skip the issue process (I understand that this is important), just having a contact to quickly chat about building, testing and uploading the binaries would be really good to prevent things from breaking several times like in the last update.

@elboulangero
Contributor Author

> Is there some way I can get in touch with you or someone else on the kali team regarding future releases on NetExec? I think it would really help with stability if I could get in touch with you guys when we prepare for updates. To be clear, I don't want to skip the issue process (I understand that this is important), just having a contact to quickly chat about building, testing and uploading the binaries would be really good to prevent things from breaking several times like in the last update.

Really the best contact point is https://bugs.kali.org/. The Kali team keeps an eye on the bug tracker daily.

Regarding testing: we can upload new major releases of netexec to kali-experimental if you want. Just remind us; i.e., you could open a new bug on bugs.kali.org to notify us of a new release, and ask for it to be uploaded to kali-experimental first.

The purpose of kali-experimental is exactly that: to upload packages that we want to test before rolling them out. However, packages in kali-experimental are not very visible, nor advertised, so only people in the loop (e.g. the NetExec team) are likely to know about them and test them.

Another thing: when a package is built for Kali, we run unit tests, and tests must pass for the build to succeed. So there's a level of QA there, but it's only as good as your own tests. A good way to increase quality is usually to increase test coverage. At the moment the command we run is `python3.13 -m pytest -k 'not test_add_host and not test_update_host'`.

@NeffIsBack
Member

NeffIsBack commented Feb 13, 2026

> Is there some way I can get in touch with you or someone else on the kali team regarding future releases on NetExec? I think it would really help with stability if I could get in touch with you guys when we prepare for updates. To be clear, I don't want to skip the issue process (I understand that this is important), just having a contact to quickly chat about building, testing and uploading the binaries would be really good to prevent things from breaking several times like in the last update.

> Really the best contact point is https://bugs.kali.org/. The Kali team keeps an eye on the bug tracker daily.

If you say so, we can continue like that. It's just that the communication over the bug tracker sometimes felt sluggish, where things that (it felt like) could have been solved with a few instant messages took days or even weeks, resulting in NetExec on Kali being updated much later than the original release. My goal would be that Kali is ready for the update at approximately the same time as the release happens. I know you guys have a lot to do, I was just wondering how to speed things up, like chatting about which dependencies have to be upgraded etc. (I am absolutely happy to file issues with a TL;DR of what has been discussed.)

> Regarding testing: we can upload new major releases of netexec to kali-experimental if you want. Just remind us, ie. you could open a new bug on bugs.kali.org to notify us of a new release, and ask for it to be uploaded to kali-experimental first.

> The purpose of kali-experimental is exactly that: to upload packages that we want to test before rolling it out. However, packages in kali-experimental are not very visible, nor advertised, so it's only people in the loop (eg. NetExec team) that are likely to know about it and test it.

That sounds good, then I can properly test the binary before it goes live, hopefully preventing stuff like the last few bugs from happening.

> Another thing: when a package is built for Kali, we run unit tests, and tests must pass for the build to succeed. So there's a level of QA there, but it's only as good as your own tests. A good way to increase quality is usually to increase test coverage. At the moment the command we run is `python3.13 -m pytest -k 'not test_add_host and not test_update_host'`.

Oh, so you are using the tests in tests/test_smb_database.py from NetExec? Testing is a difficult topic, especially with a versatile tool like NetExec. For now, the biggest test suite we have is the e2e tests, which I usually run before doing releases/building the binaries. These cover a lot of execution chains, but they require a target domain (or service) to run against. If you want to expand the test suite, we are happy to help/collaborate.

@Marshall-Hallenbeck
Collaborator

> Oh so you are using the tests in tests/test_smb_database.py from NetExec? Testing is a difficult topic especially with a versatile tool like NetExec. For now, the biggest testing suite we have are the e2e tests which I usually run before doing releases/building the binaries. These cover a lot of execution chains, however requiring a target domain (or service) to be run against. If you want to expand the testing suite we are happy to help/collaborate.

I've been looking into the tests and how we can potentially add unit tests with a bunch of mocking. It might be a lot of work, but I've had decent results writing tests on existing code with the new Opus 4.6, so I'm going to play around with it more.

@elboulangero when you run the unit tests, is pytest a requirement?

@elboulangero
Contributor Author

elboulangero commented Feb 27, 2026

> @elboulangero when you run the unit tests, is pytest a requirement?

@Marshall-Hallenbeck No. It's just that the Python packaging tools guessed that automatically, and a quick grep of the netexec repo suggests it's the correct guess.

But no, that's not a requirement. If the packaging tools don't guess it right, we can override that and define exactly what command to run for tests.

@elboulangero
Contributor Author

elboulangero commented Feb 27, 2026

@NeffIsBack

> Oh so you are using the tests in tests/test_smb_database.py from NetExec?

Yes, this is run when the package is built, so if it fails, the package doesn't build and we have to look into it and fix it.

> For now, the biggest testing suite we have are the e2e tests

So I guess I could run those as part of our regression tests...

> however requiring a target domain (or service) to be run against.

Ah, no, I can't.


Generally speaking, E2E tests are the kind of thing we'd run for regression tests, what we call autopkgtests in Debian-speak. autopkgtests run before the package (netexec) enters kali-rolling, and run again every time a dependency of netexec changes in kali-dev. If the new dependency breaks the autopkgtests of netexec, then it won't enter kali-rolling (to avoid breaking netexec).

(note: kali-dev is the suite that receives new packages - then they need to clear some QA before they can migrate to the user-facing suite kali-rolling)

Another way to say the paragraph above: autopkgtests for Debian/Kali really run at scale: they run often, and for many packages. There's a huge volume of tests running at any time. Which means that these tests must be very reliable (no false positives, no flaky tests), which usually means: only use the local network.
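
For illustration, autopkgtests are declared in a debian/tests/control file in the package. A minimal sketch might look like this (the field names are real autopkgtest fields, but the exact command here is a hypothetical example, not Kali's actual packaging):

```
Test-Command: python3 -m pytest -k 'not test_add_host and not test_update_host'
Depends: @, python3-pytest
Restrictions: allow-stderr
```

Here `@` means "the package under test itself", so the test runs against the installed netexec, not the source tree.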

I know it's challenging for a network tool to be tested without accessing online resources.

So what I'm trying to say is: I'm not 100% against running E2E tests, with network access, as part of our regression tests. But in practice, experience has shown that we often have to disable those tests, because they will eventually fail due to network conditions (false positives), and therefore block package migrations for no reason. It ends up being more pain than gain, so it doesn't work for us.


What works best for us is: offline tests, unit tests, mocking. Anything that is very reliable and can run at scale.
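
To sketch what that could look like (the function and attribute names below are hypothetical illustrations, not actual NetExec code), a mocked unit test needs no network at all:

```python
from unittest import mock

# Hypothetical stand-in for a network-facing helper: in real code,
# `connection` would wrap a live SMB session.
def get_server_name(connection):
    return connection.getServerName().rstrip("\x00")

# Offline unit test: the mock replaces the live connection, so the
# test is deterministic and safe to run at scale in a build farm.
def test_server_name_offline():
    conn = mock.Mock()
    conn.getServerName.return_value = "DC01\x00"
    assert get_server_name(conn) == "DC01"

test_server_name_offline()
```

pytest would discover a test like this automatically, and it never touches the network, so it can't fail due to infrastructure conditions.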

Hope that clarifies a bit how package testing works in Kali, and to some extent in Debian and other Linux distros.

@elboulangero
Contributor Author

> Hey thanks for the PR but if you are going to submit something, please test it before doing so. We have a PR template that you should also follow, and one of those steps is testing it yourself before submitting. I will be marking this as on hold until you can fill out the template and test this change for anything breaking.

@Marshall-Hallenbeck Sorry I didn't answer that yet; it was the topic of this MR.

I'm not familiar with netexec and I don't have a setup to test this particular change.

I don't think it's a big change, terminaltables3 is said to be a drop-in replacement, so it's more like a dependency bump of terminaltables.

Worth noting: I proposed the same change to another project who noted (rightly so) that terminaltables3 isn't really maintained either, and they consider switching to another thing: BC-SECURITY/Empire#809

With all that said: I completely understand that you don't want to merge this MR, and don't have the time to test it yourself, and that's 100% fine. Don't hesitate to close this MR.

@Marshall-Hallenbeck
Collaborator

> I don't think it's a big change, terminaltables3 is said to be a drop-in replacement, so it's more like a dependency bump of terminaltables.

Famous last words 😂

> Worth noting: I proposed the same change to another project who noted (rightly so) that terminaltables3 isn't really maintained either, and they consider switching to another thing: BC-SECURITY/Empire#809

Cool, good to know, thanks. We can check out where terminaltables is used and see if this alternative works for us too. We generally don't want deprecated packages being used either.

@NeffIsBack
Member

NeffIsBack commented Mar 4, 2026

> So what I'm trying to say is: I'm not 100% against running E2E tests, with network access, as part of our regression tests. But in practice, experience has shown that we often have to disable those tests, because they will always fail due to network condition (false positive), and therefore block package migrations for no reason. It ends up being more pain than gain, so it doesn't work for us.

Thanks for the explanation, that absolutely makes sense! I experience random failures all the time when running the tests, due to unstable infrastructure. The best solution for future releases is likely that I run the tests once the new version is in kali-dev/experimental (wherever it is staged) and manually verify that the binary works as intended.

Regarding unit tests: as you said, covering the real world with unit tests is very hard, and since we have concentrated on adding an e2e test suite, we have very few unit tests. @Marshall-Hallenbeck might expand on this in the future, but for now I don't think I will have the time to take care of these. The best solution is likely, as said above, that I just manually run the e2e tests once the binary is staged into kali-experimental.

> With all that said: I completely understand that you don't want to merge this MR, and don't have the time to test it yourself, and that's 100% fine. Don't hesitate to close this MR.

No worries, I will test it as soon as I am available.

Member

@NeffIsBack left a comment


I think it is fine for now, even if the package isn't that well maintained either. As long as we stay compatible with Debian, we've achieved the goal, and currently it seems to work as is. LGTM!


@NeffIsBack NeffIsBack merged commit 26465b1 into Pennyw0rth:main Mar 11, 2026
5 checks passed